43 research outputs found

    Chaotic root-finding for a small class of polynomials

    In this paper we present a new closed-form solution to the chaotic difference equation y_{n+1} = a_2 y_n^2 + a_1 y_n + a_0 with coefficient a_0 = (a_1 - 4)(a_1 + 2)/(4 a_2), and, using this solution, show how exact roots of a special set of related polynomials of order 2^p, p ∈ ℕ, with two independent parameters can be generated for any p.
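    To make the recurrence concrete, below is a minimal sketch that simply iterates the map with the stated constraint on a_0; the coefficient values and initial condition are illustrative choices, and the paper's closed-form solution and root construction are not reproduced here.

```python
# Iterate y_{n+1} = a2*y_n^2 + a1*y_n + a0 with a0 = (a1 - 4)(a1 + 2)/(4*a2).
# a1, a2 and y0 are illustrative; y0 is picked so that a2*y0 + a1/2 lies in
# [-2, 2], which keeps this particular example bounded.
def iterate_map(y0, a1, a2, n_steps):
    a0 = (a1 - 4.0) * (a1 + 2.0) / (4.0 * a2)
    y, orbit = y0, [y0]
    for _ in range(n_steps):
        y = a2 * y * y + a1 * y + a0
        orbit.append(y)
    return orbit

print(iterate_map(y0=0.3, a1=1.0, a2=2.0, n_steps=10))
```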

    Simple and Nearly Optimal Polynomial Root-finding by Means of Root Radii Approximation

    We propose a new simple but nearly optimal algorithm for the approximation of all sufficiently well isolated complex roots and root clusters of a univariate polynomial. Quite typically, the known root-finders first compute crude but reasonably good approximations to well-conditioned roots (that is, those isolated from the other roots) and then refine these approximations very fast, in Boolean time that is nearly optimal up to a polylogarithmic factor. By combining and extending some old root-finding techniques, the geometry of the complex plane, and randomized parametrization, we accelerate the initial stage of obtaining crude approximations to all well-conditioned simple and multiple roots as well as isolated root clusters. Our algorithm performs this stage at a Boolean cost dominated by the nearly optimal cost of the subsequent refinement of these approximations, which we can perform concurrently, with minimum processor communication and synchronization. Our techniques are quite simple and elementary; their power and application range may increase when combined with the known efficient root-finding methods. Comment: 12 pages, 1 figure.
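    The sketch below is not the authors' algorithm, only a hedged illustration of the general two-stage pattern the abstract describes: a crude, coefficient-based root-radius estimate supplies starting points on a circle, and Newton's method refines them. The bound used (a loosened Fujiwara-type bound) and the sample polynomial are illustrative.

```python
import numpy as np

def root_radius_bound(coeffs):
    """Upper bound on |root| for p(x) = c[0]*x^n + ... + c[n] (loosened Fujiwara bound)."""
    c = np.asarray(coeffs, dtype=complex)
    n = len(c) - 1
    return 2.0 * max(abs(c[k] / c[0]) ** (1.0 / k) for k in range(1, n + 1) if c[k] != 0)

def newton_refine(coeffs, z0, iters=50, tol=1e-12):
    """Refine one root estimate of the polynomial with Newton's method."""
    p = np.polynomial.polynomial.Polynomial(coeffs[::-1])  # ascending-order coefficients
    dp = p.deriv()
    z = z0
    for _ in range(iters):
        dz = p(z) / dp(z)
        z -= dz
        if abs(dz) < tol:
            break
    return z

# Crude stage: spread starting points on the root-radius circle, then refine each.
coeffs = [1, 0, -2, 5]                       # x^3 - 2x + 5, an illustrative example
R = root_radius_bound(coeffs)
starts = R * np.exp(2j * np.pi * (np.arange(3) + 0.25) / 3)
print([newton_refine(coeffs, z) for z in starts])
```

    With such naive starting points several starts can of course land on the same root; the point of the paper is precisely to make this initial stage reliable and nearly cost-optimal.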

    Properties of entangled photon pairs generated by a CW laser with small coherence time: theory and experiment

    The generation of entangled photon pairs by parametric down-conversion from solid-state CW lasers with small coherence time is analyzed theoretically and experimentally. We consider a compact and low-cost setup based on a two-crystal scheme with Type-I phase matching. We study the effect of the pump coherence time on the entangled-state visibility and on the violation of Bell's inequality, as a function of the crystals' length. The full density matrix is reconstructed by quantum tomography. The proposed theoretical model is verified using a purification protocol based on a compensation crystal. Comment: 10 pages, 11 figures.
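    As a self-contained illustration of the link between visibility and Bell violation discussed above (not the paper's model or data): for a Werner-like mixture of the |Φ+⟩ Bell state with white noise at visibility V, the CHSH value at the standard optimal settings is 2√2·V, so the classical bound 2 is violated only for V > 1/√2.

```python
import numpy as np

# Pauli operators and the |Phi+> Bell state.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(phi_plus, phi_plus.conj())

def chsh(rho):
    """CHSH value S for the settings that are optimal for |Phi+>."""
    a, ap = Z, X
    b, bp = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)
    E = lambda A, B: np.real(np.trace(rho @ np.kron(A, B)))
    return E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)

for V in (1.0, 0.9, 1 / np.sqrt(2), 0.5):
    rho = V * rho_bell + (1 - V) * np.eye(4) / 4   # visibility-V Werner-like state
    print(f"V = {V:.3f}  S = {chsh(rho):.3f}")      # prints S = 2*sqrt(2)*V
```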

    Array algorithms for H^2 and H^∞ estimation

    Currently, the preferred method for implementing H^2 estimation algorithms is what is called the array form, which includes two main families: square-root array algorithms, which are typically more stable than conventional ones, and fast array algorithms, which, when the system is time-invariant, typically offer an order-of-magnitude reduction in computational effort. Using our recent observation that H^∞ filtering coincides with Kalman filtering in Krein space, in this chapter we develop array algorithms for H^∞ filtering. These can be regarded as natural generalizations of their H^2 counterparts, and involve propagating the indefinite square roots of the quantities of interest. The H^∞ square-root and fast array algorithms both have the interesting feature that one does not need to explicitly check the positivity conditions required for the existence of H^∞ filters. These conditions are built into the algorithms themselves, so that an H^∞ estimator of the desired level exists if, and only if, the algorithms can be executed. However, since the H^∞ square-root algorithms predominantly use J-unitary transformations, rather than the unitary transformations required in the H^2 case, further investigation is needed to determine the numerical behavior of such algorithms.
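    For the H^2 case, the square-root array idea can be sketched in a few lines: the pre-array is triangularized by an orthogonal transformation (here obtained from a QR factorization of its transpose) and the filter quantities are read off the post-array. The function below assumes G = I and is only an illustration of that classical H^2 update, not of the J-unitary H^∞ variant developed in the chapter.

```python
import numpy as np

def sqrt_array_kalman_step(F, H, P_sqrt, Q_sqrt, R_sqrt):
    """One H^2 square-root (array) covariance update, assuming G = I.
    The covariance square root is propagated directly; P itself is never formed."""
    n, p = F.shape[0], H.shape[0]
    m = Q_sqrt.shape[1]
    # Pre-array: [[R^{1/2}, H P^{1/2}, 0], [0, F P^{1/2}, Q^{1/2}]].
    pre = np.block([
        [R_sqrt,           H @ P_sqrt, np.zeros((p, m))],
        [np.zeros((n, p)), F @ P_sqrt, Q_sqrt],
    ])
    # Triangularize: pre = L @ Theta^T with L lower triangular and Theta orthogonal.
    L = np.linalg.qr(pre.T, mode="reduced")[1].T
    Re_sqrt     = L[:p, :p]       # square root of the innovation covariance R_e
    Kbar        = L[p:, :p]       # equals F P H^T R_e^{-T/2}
    P_next_sqrt = L[p:, p:]       # square root of the next predicted covariance
    # Predicted-state gain K_p = F P H^T R_e^{-1} = Kbar @ inv(Re_sqrt).
    K = np.linalg.solve(Re_sqrt.T, Kbar.T).T
    return K, Re_sqrt, P_next_sqrt
```

    In the H^∞ version described above, the orthogonal triangularization is replaced by a J-unitary one acting on an indefinite pre-array, which is why its numerical behavior requires separate study.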

    svdPPCS: an effective singular value decomposition-based method for conserved and divergent co-expression gene module identification

    Background: Comparative analysis of gene expression profiling of multiple biological categories, such as different species of organisms or different kinds of tissue, promises to enhance the fundamental understanding of the universality as well as the specialization of mechanisms and related biological themes. Grouping genes with a similar expression pattern, or exhibiting co-expression, is a starting point in understanding and analyzing gene expression data. Recent literature advocates gene-module-level analysis in order to understand biological network design and system behaviors in disease and life processes; however, practical difficulties often lie in the implementation of existing methods.

    Results: Using the singular value decomposition (SVD) technique, we developed a new computational tool, named svdPPCS (SVD-based Pattern Pairing and Chart Splitting), to identify conserved and divergent co-expression modules of two sets of microarray experiments. In the proposed method, gene modules are identified by splitting the two-way chart coordinated by a pair of left singular vectors factorized from the gene expression matrices of the two biological categories. Importantly, the cutoffs are determined by a data-driven algorithm using the well-defined statistic SVD-p. The implementation was illustrated on two time-series microarray data sets generated from samples of accessory gland (ACG) and malpighian tubule (MT) tissues of the line W^118 of M. drosophila. Two conserved modules and six divergent modules, each of which has a unique characteristic profile across tissue kinds and aging processes, were identified. The number of genes contained in these modules ranged from five to a few hundred. Three to over a hundred GO terms were over-represented in individual modules with FDR < 0.1. One divergent module suggested a tissue-specific relationship between the expression of mitochondrion-related genes and the aging process. This finding, together with others, may be of biological significance. The validity of the proposed SVD-based method was further verified by a simulation study, as well as by comparisons with regression analysis and with cubic spline regression analysis plus PAM-based clustering.

    Conclusions: svdPPCS is a novel computational tool for the comparative analysis of transcriptional profiling. It especially fits the comparison of time-series data of related organisms, or of different tissues of the same organism, under equivalent or similar experimental conditions. The general scheme can be directly extended to comparisons of multiple data sets. It can also be applied to the integration of data sets from different platforms and from different sources.
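    The toy sketch below illustrates only the core geometric idea described above (take the leading left singular vectors of the two expression matrices, pair them, and split the resulting two-way chart); the fixed cutoff stands in for the paper's data-driven SVD-p procedure, and all names, thresholds and labels are illustrative.

```python
import numpy as np

def paired_module_chart(expr_a, expr_b, cutoff=0.05):
    """Toy SVD pattern pairing for two genes-by-samples matrices over the same genes.
    Returns one coarse label per gene based on its position in the two-way chart."""
    u_a = np.linalg.svd(expr_a, full_matrices=False)[0][:, 0]  # leading left singular vector
    u_b = np.linalg.svd(expr_b, full_matrices=False)[0][:, 0]
    if np.dot(u_a, u_b) < 0:          # align the arbitrary SVD sign
        u_b = -u_b
    labels = []
    for a, b in zip(u_a, u_b):
        if abs(a) < cutoff and abs(b) < cutoff:
            labels.append("background")
        elif abs(a) >= cutoff and abs(b) >= cutoff and np.sign(a) == np.sign(b):
            labels.append("conserved")   # same quadrant of the chart in both categories
        else:
            labels.append("divergent")   # strong in one category only, or opposite signs
    return labels

rng = np.random.default_rng(0)
expr_a = rng.normal(size=(200, 12))      # 200 genes x 12 samples, category A
expr_b = rng.normal(size=(200, 12))      # same genes, category B
print(paired_module_chart(expr_a, expr_b)[:10])
```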

    Pervasive gaps in Amazonian ecological research

    Biodiversity loss is one of the main challenges of our time [1,2], and attempts to address it require a clear understanding of how ecological communities respond to environmental change across time and space [3,4]. While the increasing availability of global databases on ecological communities has advanced our knowledge of biodiversity sensitivity to environmental changes [5,6,7], vast areas of the tropics remain understudied [8,9,10,11]. In the American tropics, Amazonia stands out as the world's most diverse rainforest and the primary source of Neotropical biodiversity [12], but it remains among the least known forests in America and is often underrepresented in biodiversity databases [13,14,15]. To worsen this situation, human-induced modifications [16,17] may eliminate pieces of the Amazon's biodiversity puzzle before we can use them to understand how ecological communities are responding. To increase generalization and applicability of biodiversity knowledge [18,19], it is thus crucial to reduce biases in ecological research, particularly in regions projected to face the most pronounced environmental changes. We integrate ecological community metadata of 7,694 sampling sites for multiple organism groups in a machine learning model framework to map the research probability across the Brazilian Amazonia, while identifying the region's vulnerability to environmental change. 15%–18% of the most neglected areas in ecological research are expected to experience severe climate or land use changes by 2050. This means that unless we take immediate action, we will not be able to establish their current status, much less monitor how it is changing and what is being lost.

    Gaussian elimination

    As the standard method for solving systems of linear equations, Gaussian elimination (GE) is one of the most important and ubiquitous numerical algorithms. However, its successful use relies on understanding its numerical stability properties and how to organize its computations for efficient execution on modern computers. We give an overview of GE, ranging from theory to computation. We explain why GE computes an LU factorization and the various benefits of this matrix factorization viewpoint. Pivoting strategies for ensuring numerical stability are described. Special properties of GE for certain classes of structured matrices are summarized. How to implement GE in a way that efficiently exploits the hierarchical memories of modern computers is discussed. We also describe block LU factorization, corresponding to the use of pivot blocks instead of pivot elements, and explain how iterative refinement can be used to improve a solution computed by GE. Other topics are GE for sparse matrices and the role GE plays in the TOP500 ranking of the world’s fastest computers.
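    A minimal dense LU factorization with partial pivoting, the textbook form of GE discussed above, is sketched below for illustration; blocked, sparse, and cache-aware variants are the subject of the article itself.

```python
import numpy as np

def lu_partial_pivoting(A):
    """Gaussian elimination as an LU factorization with partial pivoting: P A = L U."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    perm = np.arange(n)
    for k in range(n - 1):
        # Partial pivoting: move the largest |entry| in column k (rows k..n-1) to the diagonal.
        piv = k + int(np.argmax(np.abs(U[k:, k])))
        if piv != k:
            U[[k, piv], :] = U[[piv, k], :]
            L[[k, piv], :k] = L[[piv, k], :k]
            perm[[k, piv]] = perm[[piv, k]]
        # Eliminate below the pivot; the multipliers become column k of L.
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return np.eye(n)[perm], L, np.triu(U)

A = np.array([[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]])
P, L, U = lu_partial_pivoting(A)
print(np.allclose(P @ A, L @ U))   # True
```

    One step of iterative refinement would then solve L U d = P r for the residual r = b - A x and update x accordingly.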
